

First order expansion of convex regularized estimators

Bellec, Pierre, Kuchibhotla, Arun

Neural Information Processing Systems

We consider first order expansions of convex penalized estimators in high-dimensional regression problems with random designs. Our setting includes linear regression and logistic regression as special cases. For a given penalty function $h$ and the corresponding penalized estimator $\hat{\beta}$, we construct a quantity $\eta$, the first order expansion of $\hat{\beta}$, such that the distance between $\hat{\beta}$ and $\eta$ is an order of magnitude smaller than the estimation error $\|\hat{\beta} - \beta^*\|$. In this sense, the first order expansion $\eta$ can be thought of as a generalization of influence functions from the mathematical statistics literature to regularized estimators in high dimensions. This first order expansion implies that the risk of $\hat{\beta}$ is asymptotically the same as the risk of $\eta$, which leads to a precise characterization of the MSE of $\hat{\beta}$; this characterization takes a particularly simple form for isotropic design. The expansion also leads to inference results based on $\hat{\beta}$. We provide sufficient conditions for the existence of such a first order expansion for three regularizers: the Lasso in its constrained form, the Lasso in its penalized form, and the Group Lasso. The results apply to general loss functions under conditions that are satisfied by the squared loss in linear regression and by the logistic loss in the logistic model.
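To make the expansion concrete, here is a short LaTeX sketch of the classical influence-function expansion that the abstract generalizes; the symbols $H$ and $\ell$ are our notation, and the display is a hedged reading of the abstract rather than the paper's exact construction.

% Sketch in our notation; a hedged reading of the abstract, not the
% paper's exact construction. For an unregularized M-estimator
% \hat{\beta} = \arg\min_b \frac{1}{n}\sum_{i=1}^n \ell(x_i^\top b; y_i),
% the classical influence-function expansion reads
\[
  \hat{\beta} \approx \eta
  = \beta^* - \frac{1}{n}\sum_{i=1}^n H^{-1} \nabla_b\, \ell(x_i^\top \beta^*; y_i),
  \qquad H = \mathbb{E}\big[\nabla_b^2\, \ell(x^\top \beta^*; y)\big].
\]
% The paper's \eta plays the same role for penalized estimators, with the
% guarantee that the remainder is of smaller order than the estimation error:
\[
  \|\hat{\beta} - \eta\| = o_P\big(\|\hat{\beta} - \beta^*\|\big).
\]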


Reviews: First order expansion of convex regularized estimators

Neural Information Processing Systems

The present paper proposes an approximation, based on a first order Taylor expansion, of a convex regularized estimator. In the regularized regression setting, and under some mild conditions on the loss function and the underlying distribution that generates the data, the authors prove that one can replace the penalized estimator by this Taylor-type approximation with a guarantee that the solution obtained in this way will be close to the original solution (in Mahalanobis distance). The authors then give examples of such a proxy for the squared loss and for logistic regression, and for the constrained Lasso, the penalized Lasso, and the Group Lasso. The paper also includes a discussion of where this approach can be useful. Although this paper is a bit technical, it is well written, and the results are, in my opinion, non-trivial and interesting.
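As a numerical illustration of the simplest instance (penalized Lasso, squared loss, isotropic Gaussian design), the following Python sketch compares the Lasso solution with a soft-thresholding candidate for $\eta$; the closed form used for $\eta$ is our guess for the isotropic case $\Sigma = I$ and is not taken from the paper.

# Hedged numerical sketch: penalized Lasso with isotropic Gaussian design.
# The closed form eta = soft_threshold(beta* + X^T eps / n, lam) is our
# reading of the first order expansion when Sigma = I, not the paper's
# exact construction.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s, lam = 2000, 500, 10, 0.1

beta_star = np.zeros(p)
beta_star[:s] = 1.0
X = rng.standard_normal((n, p))          # isotropic design, Sigma = I
eps = rng.standard_normal(n)
y = X @ beta_star + eps

# Lasso in penalized form: (1/2n) * ||y - Xb||^2 + lam * ||b||_1
beta_hat = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_

# Candidate first order expansion under isotropic design:
z = beta_star + X.T @ eps / n
eta = np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# The expansion property predicts ||beta_hat - eta|| << ||beta_hat - beta*||.
print("||beta_hat - eta||   =", np.linalg.norm(beta_hat - eta))
print("||beta_hat - beta*|| =", np.linalg.norm(beta_hat - beta_star))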

